15 - Artificial Intelligence I [ID:54622]

So the quiz has ended.

One thing I would like to mention: there are a couple of numbers I want to talk to you about.

This is less for the people here, and more for the streamies and the watchies.

I've just looked. There are about 760 students registered on StudOn.

That typically means that everybody who wants to keep up with the course is registered there.

Two years ago we had almost one and a half thousand, and even that didn't translate into too much.

What worries me a bit is that there are also 760 registered for the exam,

which is not a problem as such; it's a lot of work for grading, but we can do this, we know how to do this.

But quizzes are somewhere between 150 and 200 at the moment.

And there are about 35 to 50 or so in class.

And if you actually look at the data, people who are in the class and in the quizzes typically do quite well.

A grade of two or better is a conservative estimate for that group.

The people who are not in class but do the quizzes typically do not do quite as well, but only a bit worse.

And for the remainder, who are only watching, life is tough.

There's a relatively good correlation between not taking the quizzes and not doing the homework on the one hand,

and showing up again at the retake on the other.

It's not automatic but there's a good correlation.

I'm not saying it's causal but you should think about this.

It has me a bit worried because I don't like failing half of the class.

But somehow there's a big difference here which has me worried.

So everybody who is not here or not taking the quizzes should think about what that means for them.

Any questions so far?

Okay, so we're doing constraint propagation.

Constraint propagation is an inference technique.

And inference means that instead of doing search at the level of states and actions, or even meta-states and meta-actions, which is kind of what the factored representations give us, we basically go one level higher and do search at the level of tighter and tighter, but equivalent, problem formulations.

We have a language which consists of variables, their domains, and constraints, and we actually reason about tighter and tighter descriptions.

Which is by the way what you are doing when you are doing Sudoku.
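As a minimal sketch of what such a CSP description looks like, here is one row of a 4x4 Sudoku written out as variables, domains, and constraints in Python. The names and structure are illustrative, not notation from the lecture.

```python
# A minimal sketch of a CSP as variables, domains, and constraints.
# One row of a 4x4 Sudoku: four cells, each may take a value from 1..4,
# and every pair of cells must take different values.
variables = ["A", "B", "C", "D"]
domains = {v: {1, 2, 3, 4} for v in variables}

# Binary constraints as a relation test per ordered pair of variables.
def all_different(x, y):
    return x != y

constraints = {(u, v): all_different
               for u in variables for v in variables if u != v}

def satisfies(assignment):
    """Check a complete assignment against all binary constraints."""
    return all(test(assignment[u], assignment[v])
               for (u, v), test in constraints.items())
```

Tightening the domains in this representation is exactly the "tighter but equivalent description" idea: the set of solutions stays the same, but the description gets smaller.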

And in a way humans are very language-based animals.

That's what we are good at.

We invent languages for stuff and then we use the languages to think about things.

And the math we are doing here is just a particularly well suited language for this kind of thinking.

And so we've been talking about inference procedures,

arc consistency and forward checking in particular.
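As a reminder of what the core arc-consistency step does, here is a minimal sketch of the standard revise operation (the textbook AC-3 building block, not code from the lecture), written against the hypothetical `domains`/`constraints` structures from the sketch above.

```python
def revise(domains, constraints, x, y):
    """Make variable x arc-consistent with respect to y:
    remove every value of x that has no supporting value in y's domain.
    Returns True if x's domain was tightened."""
    test = constraints.get((x, y))
    if test is None:
        return False          # no constraint between x and y, nothing to do
    removed = {vx for vx in domains[x]
               if not any(test(vx, vy) for vy in domains[y])}
    if removed:
        domains[x] -= removed
    return bool(removed)
```

AC-3 keeps calling this on a queue of arcs until nothing changes any more; forward checking is the cheaper variant that only revises the neighbours of the variable that was just assigned.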

And we've been kind of interleaving inference with search.

We're doing inference whenever we can because that actually gets rid of a lot of search.

And when we can't make progress with inference, we make search-like decisions, which means we systematically have to test all the possible instantiations.

Which means that, algorithmically, the search spreads out and becomes exponential.

So generally the better our inference is, the better our overall performance.
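A minimal sketch of that interleaving, assuming the `revise` helper and the CSP structures from the sketches above (again illustrative, not the lecture's algorithm): make a search decision, immediately do forward-checking inference, and backtrack as soon as some domain becomes empty.

```python
import copy

def backtrack(domains, constraints, assignment=None):
    """Backtracking search interleaved with forward checking.
    Returns a complete consistent assignment, or None if none exists."""
    if assignment is None:
        assignment = {}
    if len(assignment) == len(domains):
        return assignment
    # Search-like decision: pick an unassigned variable and try its values.
    var = next(v for v in domains if v not in assignment)
    for value in sorted(domains[var]):
        new_domains = copy.deepcopy(domains)
        new_domains[var] = {value}
        # Inference (forward checking): prune the other variables' domains
        # against the value we just committed to.
        consistent = True
        for other in new_domains:
            if other != var and (other, var) in constraints:
                revise(new_domains, constraints, other, var)
                if not new_domains[other]:
                    consistent = False   # dead end found without further search
                    break
        if consistent:
            result = backtrack(new_domains, constraints,
                               {**assignment, var: value})
            if result is not None:
                return result
    return None
```

On the little Sudoku-row CSP above, `backtrack(domains, constraints)` would return one of the permutations of {1, 2, 3, 4}; the forward-checking step is what prunes most of the exponential spread before the search ever explores it.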

And we've basically looked at a couple of ways of exploiting the structure of the constraint graph to do better.

We had this idea that if the constraint graph has unconnected components, we can solve them individually.

And since we're generally exponential in the number of variables: if the n variables split into k components, each component has on average about n/k of them.

So we're exponential in n/k rather than in n, which is a good thing.

And we've done the numbers for a couple of things.
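As a back-of-the-envelope version of those numbers (the concrete values of d, n, and k below are illustrative, not the ones used in the lecture):

```python
# Rough comparison of worst-case search effort with and without decomposition.
d, n, k = 2, 80, 4          # domain size, number of variables, independent components

whole_problem = d ** n               # d^n: about 1.2e24 complete assignments
per_component = d ** (n // k)        # d^(n/k): 2^20, about a million
decomposed    = k * per_component    # the components add up, they don't multiply

print(f"{whole_problem:.3e} vs {decomposed:.3e}")   # ~1.2e24 vs ~4.2e6
```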
